Since their inception more than 50 years ago, Brain-Computer Interfaces (BCIs) have held the promise of compensating for functions lost by people with disabilities by enabling direct communication between the brain and external devices. While research over the past decades has demonstrated the feasibility of BCI as an assistive technology, widespread use of BCI outside the lab remains out of reach. This can be attributed to a number of challenges that must be addressed before BCI becomes practical, including limited data availability, the limited temporal and spatial resolution of non-invasively recorded brain signals, and inter-subject variability. In addition, BCI development has long been confined to a few specific, simple brain patterns, while developing other BCI applications that rely on more complex brain patterns has proven infeasible. Generative Artificial Intelligence (GAI) has recently emerged as a domain of artificial intelligence in which trained models generate new data whose properties resemble those of the available data. Given the improvements observed in other domains facing challenges similar to those of BCI development, GAI has recently been employed in a multitude of BCI development applications to generate synthetic brain activity, thereby augmenting the recorded brain activity. Here, a brief review of the recent adoption of GAI techniques to overcome the aforementioned BCI challenges is provided, demonstrating the enhancements achieved by GAI techniques in augmenting limited EEG data, improving the spatiotemporal resolution of recorded EEG data, enhancing cross-subject performance of BCI systems, and implementing end-to-end BCI applications. GAI could represent the means by which BCI is transformed into a prevalent assistive technology, improving the quality of life of people with disabilities and helping establish BCI as an emerging human-computer interaction technology for general use.